
    Synthesizing mood-affected signed messages: Modifications to the parametric synthesis

    This is the author’s version of a work that was accepted for publication in the International Journal of Human-Computer Studies. Changes resulting from the publishing process, such as peer review, editing, corrections, structural formatting, and other quality control mechanisms, may not be reflected in this document. Changes may have been made to this work since it was submitted for publication. A definitive version was subsequently published in International Journal of Human-Computer Studies, 70, 4 (2012), DOI: 10.1016/j.ijhcs.2011.11.003.

    This paper describes the first approach to synthesizing mood-affected signed content. The research focuses on the modifications applied to a parametric sign language synthesizer (based on phonetic descriptions of the signs). We propose modifications that allow for the synthesis of different perceived frames of mind within synthetic signed messages. Three of these proposals focus on modifications to three of the signs' phonologic parameters (hand shape, movement and the non-hand parameter). The other two proposals focus on the temporal aspects of the synthesis (sign speed and transition duration) and on the representation of muscular tension through inverse kinematics procedures. The resulting variations were evaluated by Spanish deaf signers, who concluded that our system can generate the same signed message with three different frames of mind, each correctly identified by Spanish Sign Language signers.
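The timing proposal summarized above (scaling sign speed and transition duration per mood) can be sketched as follows. All names, moods and scale factors here are illustrative assumptions, not the values published in the paper.

```python
# Hypothetical mood-driven retiming of a signed message: each mood scales
# the per-sign duration and the inter-sign transition differently.
MOOD_TIMING = {
    "angry":   {"sign_speed": 1.4, "transition": 0.6},  # faster, more abrupt
    "neutral": {"sign_speed": 1.0, "transition": 1.0},
    "sad":     {"sign_speed": 0.7, "transition": 1.5},  # slower, smoother
}

def retime_message(signs, mood):
    """Scale each sign's duration and the transition that follows it."""
    factors = MOOD_TIMING[mood]
    return [
        {
            "gloss": sign["gloss"],
            "duration": sign["duration"] / factors["sign_speed"],
            "transition": sign["transition"] * factors["transition"],
        }
        for sign in signs
    ]

message = [{"gloss": "HOUSE", "duration": 0.8, "transition": 0.2},
           {"gloss": "BIG", "duration": 0.6, "transition": 0.2}]
angry = retime_message(message, "angry")
```

The same message description is reused for every mood; only the timing layer changes, which matches the parametric nature of the synthesizer.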

    Spanish Sign Language synthesis system

    This is the author’s version of a work that was accepted for publication in the Journal of Visual Languages and Computing. Changes resulting from the publishing process, such as peer review, editing, corrections, structural formatting, and other quality control mechanisms, may not be reflected in this document. Changes may have been made to this work since it was submitted for publication. A definitive version was subsequently published in Journal of Visual Languages and Computing, 23, 3 (2012), DOI: 10.1016/j.jvlc.2012.01.003.

    This work presents a new approach to the synthesis of Spanish Sign Language (LSE). Its main contributions are the use of a centralized relational database for storing sign descriptions, the proposal of a new input notation, and a new avatar design whose skeleton structure improves the synthesis process. The relational database facilitates a highly detailed phonologic description of the signs, including parameter synchronization and timing. The centralized database approach was introduced so that the representation of each sign can be validated by the LSE National Institution, FCNSE. The input notation, designated HLSML, provides multiple levels of abstraction compared with current input notations, simplifying the description and the manual definition of LSE messages. Synthetic messages obtained using our approach were evaluated by deaf users; in this evaluation a maximum recognition rate of 98.5% was obtained for isolated signs, and a recognition rate of 95% was achieved for signed sentences.
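A centralized relational store with per-parameter timing, as described above, can be sketched with a minimal schema. The table and column names here are illustrative assumptions, not the published database design.

```python
import sqlite3

# Minimal sketch: one row per sign, plus one row per phonologic parameter
# with its own start/end time, so parameters can be synchronized independently.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE sign (
    gloss       TEXT PRIMARY KEY,
    duration_ms INTEGER NOT NULL
);
CREATE TABLE phoneme (
    gloss     TEXT REFERENCES sign(gloss),
    parameter TEXT CHECK (parameter IN
                ('handshape', 'location', 'movement', 'nonhand')),
    value     TEXT NOT NULL,
    start_ms  INTEGER NOT NULL,   -- per-parameter timing/synchronization
    end_ms    INTEGER NOT NULL
);
""")
conn.execute("INSERT INTO sign VALUES ('HOUSE', 800)")
conn.execute("INSERT INTO phoneme VALUES ('HOUSE','handshape','B-flat',0,800)")
conn.execute("INSERT INTO phoneme VALUES ('HOUSE','movement','arc-down',100,700)")
row = conn.execute(
    "SELECT value FROM phoneme WHERE gloss='HOUSE' AND parameter='movement'"
).fetchone()
```

Because every sign lives in one shared database, a change validated centrally (e.g. by FCNSE) is immediately visible to every synthesizer instance that queries it.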

    The synthesis of LSE classifiers: From representation to evaluation

    This work presents a first approach to the synthesis of Spanish Sign Language (LSE) Classifier Constructions (CCs). All current attempts at the automatic synthesis of LSE simply create the animations corresponding to sequences of signs. This work, however, includes the synthesis of the LSE classification phenomena, defining elements more complex than simple signs, such as Classifier Predicates, Inflective CCs and Affixal Classifiers. The intelligibility of our synthetic messages was evaluated by native LSE signers, who reported a recognition rate of 93%.

    Hybrid paradigm for Spanish Sign Language synthesis

    The final publication is available at Springer via http://dx.doi.org/10.1007/s10209-011-0245-9.

    This work presents a hybrid approach to sign language synthesis. This approach allows the hand-tuning of the phonetic description of the signs, focusing on the temporal aspect of the sign. It therefore retains the capacity for performing morpho-phonological operations, like notation-based approaches, while improving the synthetic signing performance, as hand-tuned animation approaches do. The proposed approach simplifies the input message description using a new high-level notation and stores sign phonetic descriptions in a relational database. This relational database allows for more flexible sign phonetic descriptions; it also allows for a description of sign timing and the synchronization between sign phonemes. The new notation, named HLSML, is a gloss-based notation focused on message description. HLSML introduces several tags that allow for the modification of the signs in a message, defining dialect and mood variations, both of which are stored in the relational database, as well as message timing, including transition durations and pauses. A new avatar design is also proposed that simplifies the development of the synthesizer and avoids interfering with the independence of the sign language phonemes during animation. The results obtained showed an increase in the sign recognition rate compared with other approaches. This improvement was based on the active role that sign language experts had in the description of the signs, made possible by the flexibility of the sign storage approach. The approach will simplify the description of synthesizable signed messages, thus facilitating the creation of multimedia signed content.
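The abstract describes HLSML as a gloss-based notation with tags for mood, dialect and timing. The fragment below is only a guess at what such a message might look like, parsed with the standard library; the element and attribute names are illustrative assumptions, not the published HLSML vocabulary.

```python
import xml.etree.ElementTree as ET

# Hypothetical HLSML-like message: glosses plus mood/dialect and timing tags.
hlsml = """
<message mood="happy" dialect="madrid">
  <sign gloss="HOUSE"/>
  <pause ms="300"/>
  <sign gloss="BIG" transition_ms="150"/>
</message>
"""

root = ET.fromstring(hlsml)
glosses = [s.get("gloss") for s in root.iter("sign")]
```

The point of such a notation is that the message author works only with glosses and high-level tags; the phonetic detail behind each gloss is resolved from the relational database at synthesis time.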

    Sintetizador paramétrico multidispositivo de lengua de signos española

    Unpublished doctoral thesis. Universidad Autónoma de Madrid, Escuela Politécnica Superior, September 200

    An on-line system adding subtitles and sign language to Spanish audio-visual content

    Deaf people cannot properly access the speech information stored in any kind of recording format (audio, video, etc.). We present a system that provides subtitling and Spanish Sign Language representation capabilities so that the Spanish Deaf population can access such speech content. The system is composed of a speech recognition module, a machine translation module from Spanish to Spanish Sign Language, and a Spanish Sign Language synthesis module. On the deaf person's side, a user-friendly interface with subtitle and avatar components allows him or her to access the speech information.
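The three-stage pipeline described above can be sketched as a simple composition. Each stage below is a placeholder stub standing in for the real module (ASR, rule-based translation, avatar synthesis); the tiny lexicon is an illustrative assumption.

```python
def recognize_speech(audio: bytes) -> str:
    # Stub: a real ASR module would decode the recording's audio track.
    return "la casa es grande"

def translate_to_lse_glosses(spanish_text: str) -> list:
    # Stub: rule-based Spanish -> LSE gloss translation; articles and the
    # copula have no gloss counterpart and are dropped.
    lexicon = {"la": None, "casa": "HOUSE", "es": None, "grande": "BIG"}
    return [g for w in spanish_text.split() if (g := lexicon.get(w))]

def synthesize(glosses: list) -> list:
    # Stub: would drive the signing avatar; here it returns a sign playlist.
    return [{"gloss": g} for g in glosses]

subtitles = recognize_speech(b"...audio...")          # also shown as subtitles
playlist = synthesize(translate_to_lse_glosses(subtitles))
```

Note that the ASR output serves both consumers at once: it is displayed directly as subtitles and also fed into the translation/synthesis chain for the avatar.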

    Modeling of power converters for debugging digital controllers through FPGA emulation

    Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works. F. Lopez-Colino, A. Sanchez, A. de Castro, and J. Garrido, "Modeling of power converters for debugging digital controllers through FPGA emulation", in 15th European Conference on Power Electronics and Applications (EPE), 2013.

    Debugging a digital controller for power converters can be a lengthy process due to the long time required for mixed-signal simulations. This paper focuses on the design of a power converter model for debugging digital controllers in closed loop. The testing may be performed by means of simulation or emulation. This paper shows the results of simulating and emulating the power converter using different data representations. Experiments show that, with a good selection of data representation and the use of emulation, testing can be sped up by a factor of more than 28,000.

    This work has been partially supported by the Spanish Ministerio de Ciencia e Innovación under project TEC2009-09871.
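As a rough illustration of the kind of converter model that closes the loop around a digital controller, the sketch below steps a synchronous buck converter with an explicit-Euler update. The component values, time step and discretization choice are illustrative assumptions, not the paper's model; an FPGA emulation would run a fixed-point version of the same update equations.

```python
# Assumed component values: 12 V input, 47 uH inductor, 100 uF output
# capacitor, 2 ohm load, 100 ns simulation step.
VIN, L, C, R, DT = 12.0, 47e-6, 100e-6, 2.0, 1e-7

def step(i_l: float, v_c: float, switch_on: bool):
    """One explicit-Euler step of the buck converter state equations."""
    v_sw = VIN if switch_on else 0.0      # ideal switch node voltage
    di = (v_sw - v_c) / L                 # dI/dt across the inductor
    dv = (i_l - v_c / R) / C              # dV/dt on the output capacitor
    return i_l + di * DT, v_c + dv * DT

# Run 100 us in open loop at a fixed 50% duty cycle (1 MHz switching).
i_l, v_c = 0.0, 0.0
for n in range(1000):
    on = (n % 10) < 5
    i_l, v_c = step(i_l, v_c, on)
```

In a debugging setup, the duty cycle would come from the controller under test instead of being fixed, and the model's output voltage would be fed back to it, closing the loop entirely inside the simulator or the FPGA.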

    A rule-based translation from written Spanish to Spanish Sign Language glosses

    This is the author’s version of a work that was accepted for publication in Computer Speech and Language. Changes resulting from the publishing process, such as peer review, editing, corrections, structural formatting, and other quality control mechanisms, may not be reflected in this document. Changes may have been made to this work since it was submitted for publication. A definitive version was subsequently published in Computer Speech and Language, 28, 3 (2015), DOI: 10.1016/j.csl.2013.10.003.

    One of the aims of Assistive Technologies is to help people with disabilities to communicate with others and to provide means of access to information. As an aid to Deaf people, we present in this work a production-quality rule-based machine translation system from Spanish to Spanish Sign Language (LSE) glosses, which is a necessary precursor to building a full machine translation system that eventually produces animation output. The system implements a transfer-based architecture working from the syntactic functions of dependency analyses. A sketch of LSE is also presented. Several topics regarding translation to sign languages are addressed: the lexical gap, the bootstrapping of a bilingual lexicon, the generation of word order for topic-oriented languages, and the treatment of classifier predicates and classifier names. The system has been evaluated with an open-domain testbed, reporting 0.30 BLEU (BiLingual Evaluation Understudy) and 42% TER (Translation Error Rate). These results show consistent improvements over a statistical machine translation baseline, and some improvements over the same system preserving the word order of the source sentence. Finally, the linguistic analysis of errors has identified some differences due to a certain degree of structural variation in LSE.
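A toy version of the transfer step described above (from dependency functions to topic-first gloss order) might look as follows. The rules and the two-entry lexicon are illustrative assumptions, vastly simpler than the real system.

```python
# Hypothetical bilingual lexicon: Spanish word -> LSE gloss; function words
# with no gloss counterpart map to None and are dropped.
LEXICON = {"la": None, "casa": "HOUSE", "es": None, "grande": "BIG"}

def transfer(dependencies: list) -> list:
    """dependencies: (word, syntactic_function) pairs from a dependency parser.

    Moves the subject to the front (topic-oriented word order), then maps
    each remaining word through the lexicon.
    """
    topic = [w for w, f in dependencies if f == "subject"]
    rest = [w for w, f in dependencies if f != "subject"]
    ordered = topic + rest
    return [g for w in ordered if (g := LEXICON.get(w))]

glosses = transfer([("la", "det"), ("casa", "subject"),
                    ("es", "copula"), ("grande", "attribute")])
```

The real system layers many such transfer rules over full dependency analyses; this sketch only shows the shape of the mapping from syntactic functions to reordered glosses.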

    Distributed Spanish Sign Language synthesizer architectures

    This is an electronic version of the paper presented at the Congreso Internacional de Interacción Persona-Ordenador, held in Barcelona in 2009.

    This work presents the design of a distributed Sign Language synthesis architecture. The main objective of this design is to adapt the synthesis process to the diversity of user devices. The synthesis process has been divided into several independent modules that can be executed either on a synthesis server or on the client device. Depending on which modules are assigned to the server and which to the client, four different scenarios have been defined. These scenarios range from a heavy client design, which executes the whole synthesis, to a light client design similar to a video player. The four scenarios provide the maximum signed-message quality independently of the device's hardware resources.
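The client/server split described above can be sketched as a cut point along an ordered module chain. The module names and the exact composition of the four scenarios are guesses for illustration; the paper does not enumerate them here.

```python
# Hypothetical synthesis chain, ordered from first to last stage.
MODULES = ["translation", "phonetic_lookup", "animation", "rendering"]

# Each scenario is a cut point: modules before it run on the server,
# modules after it run on the client. In the lightest scenario the client
# only plays back video rendered entirely on the server.
SCENARIOS = {
    "heavy_client": 0,               # client executes the whole synthesis
    "medium_heavy": 1,
    "medium_light": 2,
    "light_client": len(MODULES),    # client acts as a video player
}

def split(scenario: str) -> dict:
    n = SCENARIOS[scenario]
    return {"server": MODULES[:n], "client": MODULES[n:]}
```

Modeling the scenarios as cut points keeps the modules themselves unchanged; only the deployment decision moves, which is what lets the same pipeline serve both powerful and constrained devices.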

    Plataforma de Interacción vocal de propósito general basada en SIP y VXML

    Electronic version of the paper presented at the Conferencia Ibero-Americana IADIS WWW/Internet 2006, held in Murcia in 2006.

    We describe our voice platform, developed by the HCTLab with the goal of enabling the creation of automatic-response applications for voice interaction with the user, whether through a conventional telephone, an IP telephone, or any SIP (Session Initiation Protocol) user agent. The platform is based on SIP, VoiceXML (a markup language for creating vocal user interfaces) and CCXML (a markup language for creating advanced call-control applications), as well as other W3C standards related to human-machine voice interaction. We also describe how the software components from which it was built, all of them open source, were selected and integrated, as well as its constituent modules and the data and control flows they exchange in the different execution scenarios. Finally, we discuss usability, performance and the usefulness of the platform as a base technology for our current line of research.